Experiments to find relationships
https://gyazo.com/87db2bc701ec42e6cb700c3c3b0b0c4e
AI may be able to assist with "finding relationships" here...
Experiment 0: six keywords in omni-private
→ Because the searched data already contains sentences that discuss the relationships between the keywords, those sentences are simply picked up and reproduced.
So I decided to use raw GPT-4.
Experiment 1: give GPT-4 a keyword X and ask it to list 6 keywords related to it.
If the output differs from expectations here, the keyword alone does not pin down the "meaning the human intended," so add a supplementary note.
This raises the resolution: the human side had expressed the concept at low resolution due to a lack of knowledge.
Example: LLM
nishio.iconWhat is LLM?
gpt.icon"LLM" stands for "Master of Laws" and refers to a master's degree in law....
Then have it describe the relationships between the Xs.
If a relationship comes up that I'm not yet aware of, that's a success.
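The protocol above can be sketched as two prompt templates: one per keyword X (with an optional supplement when X alone is ambiguous), and one final prompt over all Xs. The helper names below are hypothetical; the actual experiment was run interactively with GPT-4, not through code.

```python
# Sketch of the Experiment 1 prompting protocol. The function names are
# hypothetical stand-ins; the experiment itself was done in the chat UI.

def make_keyword_prompt(x: str, note: str = "") -> str:
    """Ask for 6 keywords related to X. `note` is the optional supplement
    added when X alone does not pin down the human-intended meaning."""
    label = f"{x}({note})" if note else x
    return f"Given keyword X, answer 6 different keywords related to X.\nX={label}"

def make_relationship_prompt(xs: list[str]) -> str:
    """After collecting keywords for each X, ask for relationships between the Xs."""
    return "Find relationship between Xs:\n" + "\n".join(f"- {x}" for x in xs)

# Without the supplement, GPT-4 read "LLM" as "Master of Laws";
# the parenthesized note disambiguates it.
prompt = make_keyword_prompt("LLM", "Large Language Model")
```

The supplement is exactly the "increasing the resolution" step: the human notices the keyword underspecifies the intended meaning and adds context.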
nishio.iconGiven keyword X, answer 6 different keywords related to X.
X=LLM(Large Language Model)
gpt.iconCertainly! Here are six keywords related to "LLM (Large Language Model)":
Natural Language Processing (NLP)
OpenAI
GPT (Generative Pre-trained Transformer)
Machine Learning
Text Generation
Deep Learning
nishio.iconX=Plurality(related to Digital Democracy)
gpt.iconCertainly! Here are six keywords related to "Plurality (related to Digital Democracy)":
Direct Democracy
E-Governance
Online Voting
Digital Participation
Civic Technology
Electronic Referendum
nishio.iconX="World ID"(A more human internet with global proof of personhood. Privacy‑First. Self‑custodial. Decentralized.)
gpt.iconCertainly! Here are six keywords related to "World ID (A more human internet with global proof of personhood. Privacy‑First. Self‑custodial. Decentralized.)":
Digital Identity
Self-Sovereign Identity
Decentralized Autonomous Organizations (DAO)
Blockchain Technology
Privacy-Preserving Systems
Digital Sovereignty
nishio.iconX=multiple subjectivity
gpt.iconThe term "subjectivity of a large number" is a combination of Japanese words: "大勢" (taisei) meaning "general trend" or "majority" and "主観" (shukan) meaning "subjectivity" or "subjective view".
Considering that context, here are six keywords related to "Majority's Subjectivity":
Public Opinion
Groupthink
Popular Will or Public Sentiment
Mainstream Values
Minority Opinion
Social Pressure
nishio.iconX=Jiro Kawakita, who made KJ-method
gpt.iconJiro Kawakita is known for the KJ method (also known as the Affinity Diagram or Kawakita Jiro method). The method is employed to synthesize large volumes of information by grouping and categorizing related facts.
Here are six keywords related to "Jiro Kawakita" and the KJ method:
KJ Method
Affinity
Information Organization
Brainstorming
Group Work
Problem Solving
nishio.iconX=Eugene T. Gendlin
gpt.iconEugene T. Gendlin was an American philosopher and psychotherapist who developed the process known as "Focusing." This process emphasized the body's inherent wisdom and the role of bodily sensations in our understanding and expression of emotions.
Here are six keywords related to "Eugene T. Gendlin":
Focusing
Experiential Psychotherapy
Felt Sense
Client-Centered Therapy
Carl Rogers (since Gendlin collaborated extensively with him)
Philosophy of the Implicit
nishio.iconFind relationship between Xs
gpt.iconLet's explore the relationship between the various "X" topics you've mentioned:
LLM (Large Language Model):
This pertains to advanced machine learning models that handle language tasks, like OpenAI's GPT series.
Plurality (related to Digital Democracy):
This topic concerns the majority's voice or opinion in a digital democratic system, and how technology can facilitate or hinder representation.
World ID:
This is about a decentralized, privacy-first digital identity. While it's not directly related to language models or digital democracy, it represents a tech-driven solution for ensuring privacy and self-sovereignty.
Majority's Subjectivity:
This has thematic similarities with "Plurality", as both discuss the views or opinions of the majority. In democratic systems, the majority's subjectivity can play a role in decision-making processes.
Jiro Kawakita:
Known for the KJ method, Jiro Kawakita's work relates to information synthesis. While it doesn't directly tie into the other themes, its focus on categorizing and understanding information could be seen as a manual counterpart to what LLMs do algorithmically.
Eugene T. Gendlin:
Gendlin's work revolves around understanding and expressing emotions through a connection with our bodily sensations. This is more psychological and philosophical than the other topics, which are more technologically oriented.
Relationships:
Technological Progress: LLM, World ID, and to some extent Digital Democracy signify advancements in technology and how they affect various facets of our lives.
Understanding and Organizing Information: LLM, KJ-method, and Focusing (from Gendlin's work) are all methods/tools for understanding and organizing either data (information) or human emotions (inner experiences).
Majority and Representation: Plurality in Digital Democracy and the Subjectivity of the Majority both touch on the theme of the majority's views, opinions, or subjectivity.
Personal Understanding and Expression: Both Gendlin's work on Focusing and the KJ method revolve around introspection, understanding, and expression, whether it's emotional or information-based.
In summary, while not all of these topics are directly related, they intersect at various points, especially around the themes of technology's role in our lives, the importance of understanding and representation, and methods for introspection and organization.
impressions
interesting
When I develop a new idea myself, I implicitly think in my native Japanese, but this time I let GPT-4, which is strong in English, think in English, so some unexpected keywords came up.
"information synthesis"
Experiment 2
I'll put it in Kozaneba.
Awareness of the process
Ah, it's interpreting "multiple" as "大勢" (taisei, the majority).
X and 6 keywords
https://gyazo.com/4ba44307a189310de4b9904f5d5123b4
Relationships
https://gyazo.com/e39158ccb0b1aa2919a0dfc18b1e175d
I'll include an explanation.
https://gyazo.com/527c43d72efa0285d352f3b21f65c3be
There's a valley between "the subjectivity of the many" and "Focusing."
Think about whether you can bridge the valley while expanding on the keywords you are interested in.
introspection
inner experience
Public Sentiment - Mainstream Values
This connection itself is interesting.
Connected.
https://gyazo.com/b0c1a8af18f7f87fefda48d68aebf480
Sentiment is, of course, emotion.
And so far, is it only humans that can have emotions?
"Inner experience" came to sit in the middle.
https://gyazo.com/8c90b935b173b02a825269a40990b668
In other words, what has not yet been verbalized is how to connect "inner experience" to LLM and Plurality.
nishio.iconFind relationship from "inner experiences" to each of "LLM" and "Plurality"
gpt.iconLet's delve into the relationship between "inner experiences" and both "LLM" and "Plurality":
Inner Experiences to LLM (Large Language Model):
Connection: "Inner experiences" refer to a person's subjective feelings, thoughts, emotions, and intuitions. LLMs, like OpenAI's GPT series, are designed to understand, generate, and respond to human language, which often conveys these inner experiences.
Understanding and Mirroring: While LLMs don't have emotions or consciousness, they can be trained on vast amounts of text that describe human emotions and sentiments. As a result, LLMs can generate language that mirrors or resembles human inner experiences.
Therapeutic Uses: LLMs can be utilized in therapeutic settings as tools to help individuals express and understand their inner experiences, even if the models don't truly "understand" emotions in a human way.
Bias and Representation: An LLM's training data could be biased or may not fully encompass the vast array of human inner experiences. This limitation makes it essential for users to be discerning when interpreting LLM responses related to deep emotional or personal topics.
Inner Experiences to Plurality (related to Digital Democracy):
Connection: "Inner experiences" in the context of plurality refer to the individual subjective feelings and opinions of members within a larger group or majority. In a digital democracy, these experiences can influence or be influenced by the collective voice.
Individual vs. Collective Voice: While "plurality" usually focuses on the voice of the majority, it's made up of individual inner experiences and opinions. The challenge is ensuring individual voices don't get lost in the collective, especially when they might offer important insights or dissenting views.
Digital Platforms as Expression Tools: Digital democracy tools can be platforms for individuals to express their inner experiences and opinions, contributing to the broader dialogue.
Feedback and Change: When a digital democracy system effectively captures the inner experiences and sentiments of its participants, it can lead to more responsive governance and policies that better reflect the needs and feelings of the population.
In conclusion, both LLMs and concepts of Plurality in Digital Democracy interact with the idea of "inner experiences" in distinct ways. LLMs attempt to understand and reproduce language related to human emotions and sentiments, whereas Plurality in Digital Democracy grapples with the challenge of representing and valuing individual sentiments within the collective voice.
update
I like intuitions.
Client-Centered Therapy was pulled out of Therapeutic Uses.
https://gyazo.com/078e5219e9cb423a7bff9fbb0ad91824
Bringing INSIGHT and DISSENTING VIEW
https://gyazo.com/ed8a11003c8e7b8cf6c0823fc66e1c39
The whole picture
https://gyazo.com/22e932c702e4aa8dd262bd48b5d3fc15
In other words, the therapeutic uses of LLMs bring insight and dissenting views to people's unverbalized inner experiences, and this contributes to Plurality in the form of more responsive governance.
That would be an extension of Keichobot.
Keichobot is based on a therapy called Clean Language, which is like a variant of Focusing, but it might be worth looking deeper into Focusing itself.
looking back
LLM (GPT-4) is already able to "find connections" and "create food for thought."
Humans are the ones failing to take all of it in.
Kozaneba is a tool I developed based on my years of experience with the KJ method.
I think the effectiveness of this tool is very well illustrated by the following line
Think about whether we can bridge the valley while expanding the keywords we are interested in.
Visualization of connections will show "where connections are lacking".
We focus our thinking on finding connections "where connections have not yet been found."
Expanding keywords out of curiosity turns thinking into something tool-supported (the "instrumentalization of thinking").
"Inner experience," which had not been verbalized at all at the beginning, grew into the most important keyword.
It was also nice to see "individual" emerge.
Experience is fundamentally personal, so we need to draw it out and convert it into something distributable.
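The "bridge the valley" step can be made concrete: treat keywords as nodes and drawn relationship lines as edges; any two connected components with no edge between them form a "valley" that invites a bridging idea. A toy sketch follows; the edge data here is illustrative, not the actual Kozaneba graph.

```python
# Toy sketch of "where there are no lines": keywords are nodes, drawn
# relationship lines are edges, and pairs of separate connected components
# are "valleys" still waiting for a bridging connection.
from itertools import combinations

edges = {("LLM", "Text Generation"),
         ("Plurality", "Public Opinion"),
         ("Focusing", "Felt Sense")}  # illustrative data only
nodes = {n for e in edges for n in e}

def components(nodes, edges):
    """Return the connected components via a simple flood fill."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(adj[c])
        seen |= comp
        comps.append(comp)
    return comps

comps = components(nodes, edges)
# Each pair of distinct components is a "valley" to think about bridging.
valleys = list(combinations(range(len(comps)), 2))
```

This is the algorithmic core of what the two-dimensional layout gives the eye for free: a missing edge is invisible in linear text but obvious on a map.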
consideration
GPT-4 at this point is already better at "finding relationships" than most people.
Here "good at" means "able to come up with many ideas."
This is like evaluating brainstorming output by quantity.
Unlike humans, it has no psychological resistance, no wondering whether it's okay to say such a thing, so it is very good at producing ideas.
What is lacking is the human ability to take in the "many proposals put forth."
Especially in one-dimensional text form, we quickly hit the limits of our cognitive capacity.
Kozaneba's two-dimensional arrangement, relationship lines, enlarged important fragments, and so on support that cognitive capacity.
It is hard to notice "what is not written" in text form; if you draw relationship lines, you can see "where there are no lines."
Weighting the "many proposals" is a subjective human interpretation.
With a binary "adopt/discard" choice, people are afraid to discard and tend to keep everything, ending up carrying more elements than they can handle.
The strategy of "magnifying what matters" lets us allocate more attention to the good material without explicitly discarding anything.
---
This page is auto-translated from /nishio/関係を見出す実験 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.